Ilya Sutskever (born 8 December 1986) is an Israeli-Canadian computer scientist who specializes in machine learning and has made several major contributions to the field of deep learning. With Alex Krizhevsky and Geoffrey Hinton, he co-invented AlexNet, a convolutional neural network.
Sutskever co-founded OpenAI and served as its chief scientist. In 2023, he was one of the members of OpenAI's board that ousted Sam Altman from his position as the organization's CEO; Altman was reinstated a week later, and Sutskever stepped down from the board. In June 2024, Sutskever co-founded the company Safe Superintelligence alongside Daniel Gross and Daniel Levy.
At the University of Toronto, Sutskever received a Bachelor of Science in mathematics in 2005, a Master of Science in computer science in 2007, and a Doctor of Philosophy in computer science in 2013. His doctoral advisor was Geoffrey Hinton.
In 2012, Sutskever built AlexNet in collaboration with Geoffrey Hinton and Alex Krizhevsky. To support AlexNet's computing demands, he bought many GTX 580 GPUs online.
At Google Brain, Sutskever worked with Oriol Vinyals and Quoc Viet Le to create the sequence-to-sequence learning algorithm, and worked on TensorFlow. He is also one of the AlphaGo paper's many co-authors.
At the end of 2015, Sutskever left Google to become co-founder and chief scientist of the newly founded organization OpenAI.
In 2022, Sutskever tweeted, "it may be that today's large neural networks are slightly conscious", which triggered debates about AI consciousness. He is considered to have played a key role in the development of ChatGPT. In 2023, he announced that he would co-lead OpenAI's new "Superalignment" project, which aimed to solve the alignment of superintelligent AI within four years. He wrote that even if superintelligence seems far off, it could arrive within the decade.
Sutskever was formerly one of the six board members of the nonprofit entity that controls OpenAI. On November 17, 2023, the board fired Sam Altman, saying that "he was not consistently candid in his communications with the board". The Information speculated that the decision was partly driven by conflict over the extent to which the company should commit to AI safety. In an all-hands company meeting shortly after the board meeting, Sutskever said that firing Altman was "the board doing its duty", but the next week, he expressed regret at having participated in Altman's ouster. Altman's firing and the resignation of OpenAI co-founder Greg Brockman led three senior researchers to resign from OpenAI. Sutskever subsequently stepped down from the OpenAI board and was absent from OpenAI's office. Some sources suggested he was leading the Superalignment team remotely, while others said he no longer had access to the team's work.
In May 2024, Sutskever announced his departure from OpenAI to focus on a new project that was "very personally meaningful" to him. His decision followed a turbulent period at OpenAI marked by leadership crises and internal debates about the direction of AI development and AI alignment protocols. Jan Leike, the other leader of the superalignment project, announced his departure hours later, citing an erosion of safety and trust in OpenAI's leadership.
In June 2024, Sutskever announced Safe Superintelligence Inc., a new company he founded with Daniel Gross and Daniel Levy, with offices in Palo Alto and Tel Aviv. In contrast to OpenAI, which releases revenue-generating products, Sutskever said the new company's "first product will be the safe superintelligence, and it will not do anything else up until then". In September 2024, the company announced that it had raised $1 billion from venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. In March 2025, Safe Superintelligence Inc. raised a further $2 billion and reportedly reached a $32 billion valuation, driven in large part by Sutskever's reputation.
In an October 2024 interview after winning the Nobel Prize in Physics, Geoffrey Hinton expressed support for Sutskever's decision to fire Altman, emphasizing concerns about AI safety.
Awards and honors